
    Toward lifelong visual localization and mapping

    Thesis (Ph.D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 171-181).

    Mobile robotic systems operating over long durations require algorithms that are robust and that scale efficiently as sensor information is continually collected. One of the fundamental problems for mobile robots is navigation, which requires the robot to have a map of its environment so that it can plan and execute its path. Having the robot use its perception sensors to perform simultaneous localization and mapping (SLAM) is beneficial for a fully autonomous system. Extending the time horizon of operations poses problems for current SLAM algorithms, both in terms of robustness and temporal scalability. To address this problem we propose a reduced pose graph model that significantly reduces the complexity of the full pose graph model. Additionally, we develop SLAM systems using two different sensor modalities: imaging sonar for underwater navigation and vision-based SLAM for terrestrial applications.

    Underwater navigation, where access to the global positioning system (GPS) is not possible, is one application domain that benefits from SLAM. In this thesis we present SLAM systems for two underwater applications. First, we describe our implementation of real-time imaging-sonar-aided navigation applied to in-situ autonomous ship hull inspection using the hovering autonomous underwater vehicle (HAUV). In addition, we present an architecture that enables the fusion of information from both a sonar and a camera system. The system is evaluated using data collected during experiments on SS Curtiss and USCGC Seneca. Second, we develop a feature-based navigation system supporting multi-session mapping, and provide an algorithm for re-localizing the vehicle between missions. In addition, we present a method for managing the complexity of the estimation problem as new information is received. The system is demonstrated using data collected with a REMUS vehicle equipped with a BlueView forward-looking sonar.

    The model we use for mapping builds on the pose graph representation, which has been shown to be an efficient and accurate approach to SLAM. One problem with the pose graph formulation is that the state space grows continuously as more information is acquired. To address this problem we propose the reduced pose graph (RPG) model, which partitions the space to be mapped and uses the partitions to reduce the number of poses used for estimation. To evaluate our approach, we present results using an online binocular and RGB-Depth visual SLAM system that uses place recognition both for robustness and for multi-session operation. Additionally, to enable large-scale indoor mapping, our system automatically detects elevator rides based on accelerometer data. We demonstrate long-term mapping using approximately nine hours of data collected in the MIT Stata Center over the course of six months. Ground truth, derived by aligning laser scans to existing floor plans, is used to evaluate the global accuracy of the system. Our results illustrate the capability of our visual SLAM system to map a large-scale environment over an extended period of time.

    by Hordur Johannsson. Ph.D.
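
    The reduced pose graph idea described above can be sketched in a few lines. The following Python fragment is a minimal illustration, assuming grid cells as the spatial partitions and a plain dictionary for bookkeeping (both assumptions; the thesis does not prescribe this structure): a new pose node is created only when the robot enters a partition with no existing pose, and revisits become constraints to the partition's existing pose, so the graph grows with the size of the mapped area rather than with mission duration.

```python
# Minimal sketch of reduced-pose-graph bookkeeping. Grid cells stand in for
# the spatial partitions; the cell size and data structures are illustrative
# assumptions, not the thesis's implementation.
class ReducedPoseGraph:
    def __init__(self, cell_m=2.0):
        self.cell_m = cell_m
        self.cell_to_pose = {}     # partition -> existing pose node id
        self.num_poses = 0
        self.factors = []          # (from_id, to_id) relative-pose constraints

    def _cell(self, xy):
        return (int(xy[0] // self.cell_m), int(xy[1] // self.cell_m))

    def add_measurement(self, prev_id, xy):
        cell = self._cell(xy)
        if cell in self.cell_to_pose:
            node = self.cell_to_pose[cell]   # revisit: reuse existing pose
        else:
            node = self.num_poses            # new territory: add a pose node
            self.num_poses += 1
            self.cell_to_pose[cell] = node
        if prev_id is not None and prev_id != node:
            self.factors.append((prev_id, node))
        return node
```

    Repeated traversals of the same area then add constraints, which improve the estimate, without adding state, which is the temporal-scalability property the abstract claims.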

    Toward autonomous harbor surveillance

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 105-113).

    In this thesis we address the problem of drift-free navigation for underwater vehicles performing harbor surveillance and ship hull inspection. Maintaining accurate localization for the duration of a mission is important for a variety of tasks, such as planning the vehicle trajectory and ensuring coverage of the area to be inspected. Our approach uses only onboard sensors in a simultaneous localization and mapping setting and removes the need for any external infrastructure such as acoustic beacons. We extract dense features from a forward-looking imaging sonar and apply pair-wise registration between sonar frames. The registrations are combined with onboard velocity, attitude, and acceleration sensors to obtain an improved estimate of the vehicle trajectory. In addition, an architecture for persistent mapping is proposed, with the intention of handling long-term operations and repetitive surveillance tasks. The proposed architecture is flexible and supports different types of vehicles and mapping methods; its design is demonstrated with an implementation of some of the system's key features. In addition, methods for re-localization are considered. Finally, results from several experiments demonstrating drift-free navigation in various underwater environments are presented.

    by Hordur Johannsson. S.M.
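
    The fusion step described above, combining pair-wise sonar registrations with dead-reckoning sensors, is in essence a trajectory least-squares problem. The toy Python example below illustrates the structure under simplifying assumptions (2-D poses, unit information weights, scipy's batch solver rather than an incremental one, and invented measurement values):

```python
# Toy 2-D sketch of fusing dead reckoning with pair-wise registration
# constraints in a least-squares trajectory estimate. All numbers are
# made up; the real system registers imaging-sonar frames in 3-D.
import numpy as np
from scipy.optimize import least_squares

def relative_pose(xi, xj):
    """Pose of xj expressed in the frame of xi, as (x, y, theta)."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, xj[2] - xi[2]])

def residuals(flat, factors):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                    # prior fixing the first pose at origin
    for i, j, meas in factors:          # odometry and registration constraints
        res.append(relative_pose(poses[i], poses[j]) - meas)
    return np.concatenate(res)

# dead reckoning: three 1 m forward steps; registration: frame 3 matches frame 0
factors = [(0, 1, np.array([1.0, 0.0, 0.0])),
           (1, 2, np.array([1.0, 0.0, 0.0])),
           (2, 3, np.array([1.0, 0.0, 0.0])),
           (0, 3, np.array([2.9, 0.1, 0.0]))]   # sonar-registration constraint
sol = least_squares(residuals, np.zeros(12), args=(factors,))
print(sol.x.reshape(-1, 3))             # smoothed trajectory estimate
```

    The registration factor between poses 0 and 3 plays the role of a sonar-frame match: it pulls the dead-reckoned trajectory back into alignment, which is what suppresses drift over the mission.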

    Efficient scene simulation for robust monte carlo localization using an RGB-D camera

    This paper presents Kinect Monte Carlo Localization (KMCL), a new method for localization in three-dimensional indoor environments using RGB-D cameras, such as the Microsoft Kinect. The approach makes use of a low-fidelity a priori 3-D model of the area of operation composed of large planar segments, such as walls and ceilings, which are assumed to remain static. Using this map as input, the KMCL algorithm employs feature-based visual odometry as the particle propagation mechanism and utilizes the 3-D map and the underlying sensor image formation model to efficiently simulate RGB-D camera views at the particle poses, using a graphics processing unit (GPU). The generated 3-D views of the scene are then used to evaluate the likelihood of the particle poses. The GPU implementation provides a factor-of-ten speedup over a pure distance-based method while providing comparable accuracy. Experimental results are presented for five different configurations: (1) a robotic wheelchair, (2) a sensor mounted on a person, (3) an Ascending Technologies quadrotor, (4) a Willow Garage PR2, and (5) an RWI B21 wheeled mobile robot platform. The results demonstrate that the system can perform robust localization with 3-D information for motions as fast as 1.5 meters per second. The approach is designed to be applicable not just to robotics but also to other applications such as wearable computing.
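
    The likelihood evaluation at the heart of KMCL can be pictured with a toy version of the weighting step. In the sketch below, render_depth is a deliberately trivial stand-in for the paper's GPU renderer (a single head-on wall instead of the planar building model), and the per-pixel Gaussian likelihood is an assumed noise model rather than the paper's exact image formation model:

```python
# Toy sketch of the KMCL weighting step: each particle's simulated depth
# view is compared against the observed depth image. The renderer and
# noise model here are illustrative assumptions.
import numpy as np

def render_depth(pose, wall_x, shape=(48, 64)):
    """Toy renderer: a single wall at x = wall_x viewed head-on, so every
    pixel sees depth (wall_x - pose_x). Stands in for the GPU renderer."""
    return np.full(shape, wall_x - pose[0])

def particle_log_likelihood(observed, simulated, sigma=0.1):
    valid = np.isfinite(observed) & np.isfinite(simulated)
    err = observed[valid] - simulated[valid]
    return -0.5 * np.sum((err / sigma) ** 2)

def reweight(particles, observed, wall_x):
    logw = np.array([particle_log_likelihood(observed, render_depth(p, wall_x))
                     for p in particles])
    logw -= logw.max()                        # numerical stability
    w = np.exp(logw)
    return w / w.sum()

# toy usage: true camera at x = 1.0, wall at x = 4.0
observed = render_depth(np.array([1.0]), 4.0)
particles = [np.array([x]) for x in (0.8, 1.0, 1.3)]
print(reweight(particles, observed, 4.0))     # middle particle dominates
```

    The speedup reported in the paper comes from doing the render-and-compare step for all particles on the GPU rather than ray-casting distances per particle on the CPU.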

    Efficient AUV Navigation Fusing Acoustic Ranging and Side-scan Sonar

    This paper presents an online nonlinear least-squares algorithm for multi-sensor autonomous underwater vehicle (AUV) navigation. The approach integrates global constraints, consisting of the range to and GPS position of a surface vehicle or buoy communicated via acoustic modems, with relative pose constraints arising from targets detected in side-scan sonar images. It utilizes an efficient optimization algorithm, iSAM, which allows for consistent online estimation of the entire set of trajectory constraints. The optimized trajectory can then be used to navigate the AUV more accurately, to extend mission duration, and to avoid GPS surfacing. Because iSAM provides efficient access to the marginal covariances of previously observed features, automatic data association is greatly simplified, particularly in sparse marine environments. A key feature of our approach is its intended scalability to a single surface sensor (a vehicle or buoy) broadcasting its GPS position and simultaneous one-way travel-time (OWTT) range to multiple AUVs. We discuss why our approach is scalable as well as robust to modem transmission failure. Results are provided for an ocean experiment using a Hydroid REMUS 100 AUV cooperating with one of two craft: an autonomous surface vehicle (ASV) and a manned support vessel. During these experiments the ranging portion of the algorithm ran online on board the AUV. Extension of the paradigm to multiple missions via the optimization of successive survey missions (and the resultant sonar mosaics) is also demonstrated.

    United States. Office of Naval Research (Grant N000140711102)
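
    The range constraint described above reduces to a simple residual: the difference between the acoustically measured range and the distance from the estimated AUV position to the surface craft's GPS fix. The sketch below shows that residual and a toy batch solve with scipy; the paper instead folds such factors into iSAM incrementally, together with odometry and sonar constraints, and the positions and ranges here are invented for illustration:

```python
# Residual for one OWTT range measurement, plus a toy trilateration solve.
# Illustrative only: the actual system optimizes these factors incrementally
# with iSAM alongside the vehicle's other trajectory constraints.
import numpy as np
from scipy.optimize import least_squares

def range_residual(auv_xy, surface_xy, measured_range):
    """Predicted minus measured range to the surface vehicle's GPS fix."""
    return np.linalg.norm(auv_xy - surface_xy) - measured_range

# three broadcast fixes from the surface craft (made-up numbers)
fixes = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
ranges = np.array([70.7, 71.0, 70.4])

def residuals(p):
    return [range_residual(p, f, r) for f, r in zip(fixes, ranges)]

est = least_squares(residuals, x0=np.array([10.0, 10.0]))
print(est.x)    # roughly (50, 50)
```

    In the online setting each new modem broadcast simply adds one such factor to the growing estimation problem, which is why a single surface sensor can serve many AUVs.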

    Robust Tracking for Real-Time Dense RGB-D Mapping with Kintinuous

    This paper describes extensions to the Kintinuous algorithm for spatially extended KinectFusion, incorporating the following additions: (i) the integration of multiple 6-DOF camera odometry estimation methods for robust tracking; (ii) a novel GPU-based implementation of an existing dense RGB-D visual odometry algorithm; and (iii) advanced fused real-time surface coloring. These extensions are validated with extensive experimental results, both quantitative and qualitative, demonstrating the ability to build dense, fully colored models of spatially extended environments for robotics and virtual reality applications while remaining robust against scenes with challenging sets of geometric and visual features.
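
    One way to picture the "integration of multiple 6-DOF camera odometry estimation methods" in (i) is a prioritized fallback between estimators, for example ICP-based tracking backed up by dense RGB-D odometry. The policy below is an assumption made for illustration; the paper's actual combination strategy may differ:

```python
# Illustrative fallback policy for combining several 6-DOF odometry
# estimators. The fitness score and threshold are assumptions, not the
# paper's switching rule.
def robust_odometry(frame, estimators, min_fitness=0.5):
    """estimators: non-empty list of callables returning (delta_pose, fitness)."""
    best = None
    for estimate in estimators:
        delta, fitness = estimate(frame)
        if fitness >= min_fitness:
            return delta                  # confident estimate: use it
        if best is None or fitness > best[1]:
            best = (delta, fitness)
    return best[0]                        # otherwise fall back to the least-bad one
```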

    Kintinuous: Spatially Extended KinectFusion

    In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended-scale environments in real time. This is achieved by (i) altering the original algorithm so that the region of space being mapped by the KinectFusion algorithm can vary dynamically; (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation; and (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components capable of operating in real time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance of the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system's ability to map areas considerably beyond the scale of the original KinectFusion algorithm, including a two-story apartment and an extended sequence taken from a car at night. To overcome the failure of iterative closest point (ICP) based odometry in areas with few geometric features, we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches, showing a trade-off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally, we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation.
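
    The dynamic-region idea in (i) and (ii) amounts to shifting the fusion volume with the camera and harvesting the data that falls out. The 1-D Python sketch below captures that bookkeeping under strong simplifications: one axis, a plain array instead of a GPU TSDF volume, and an assumed shift threshold:

```python
# 1-D sketch of Kintinuous-style volume shifting: when the camera moves far
# enough from the volume center, the volume is translated by whole voxels
# and the slice that exits is extracted. Illustrative only.
import numpy as np

def maybe_shift(volume, center_m, camera_m, voxel_m, shift_m=1.0):
    """volume: 1-D array of TSDF values along one axis. Returns the
    (possibly shifted) volume, its new center, and the values that left."""
    offset = camera_m - center_m
    n = int(round(offset / voxel_m))          # shift by whole voxels only
    if abs(offset) < shift_m or n == 0:
        return volume, center_m, np.empty(0)  # camera still near the center
    if n > 0:
        leaving = volume[:n].copy()           # slice exiting behind the camera
        volume = np.concatenate([volume[n:], np.zeros(n)])  # fresh space ahead
    else:
        leaving = volume[n:].copy()
        volume = np.concatenate([np.zeros(-n), volume[:n]])
    return volume, center_m + n * voxel_m, leaving
```

    In the full system the extracted slice is converted to points and appended to the triangular mesh, which is how the map grows beyond the fixed fusion volume.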

    Towards Autonomous Ship Hull Inspection using the Bluefin HAUV

    In this paper we describe our effort to automate ship hull inspection for security applications. Our main contribution is a system that is capable of drift-free self-localization on a ship hull for extended periods of time. Maintaining accurate localization for the duration of a mission is important for navigation and for ensuring full coverage of the area to be inspected. We exclusively use onboard sensors, including an imaging sonar, to correct for drift in the vehicle’s navigation sensors. We present preliminary results from online experiments on a ship hull. We further describe ongoing work, including adding capabilities for change detection by aligning vehicle trajectories from different missions based on a technique recently developed in our lab.

    United States. Office of Naval Research (Grant N00014-06-10043)

    Temporally Scalable Visual SLAM using a Reduced Pose Graph

    In this paper, we demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation. Unlike previous visual SLAM approaches that maintain static keyframes, our approach uses new measurements to continually improve the map, yet achieves efficiency by avoiding the addition of redundant frames and without using marginalization to reduce the graph. To evaluate our approach, we present results using an online binocular visual SLAM system that uses place recognition for both robustness and multi-session operation. Additionally, to enable large-scale indoor mapping, our system automatically detects elevator rides based on accelerometer data. We demonstrate long-term mapping in a large multi-floor building, using approximately nine hours of data collected over the course of six months. Our results illustrate the capability of our visual SLAM system to map a large area over an extended period of time.
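
    The accelerometer-based elevator detection mentioned above can be sketched as a search for the characteristic ride signature in the vertical axis: an acceleration spike of one sign, a quiet cruise, then a spike of the opposite sign. The abstract does not give the detector, so the Python fragment below, including its thresholds, is purely an illustrative assumption:

```python
# Hypothetical elevator-ride detector over vertical accelerometer samples.
# Thresholds and the detection rule are illustrative assumptions.
import numpy as np

def detect_elevator_rides(acc_z, dt, g=9.81, spike=0.4, min_cruise_s=2.0):
    """Return (start_idx, end_idx) pairs for candidate elevator rides: a
    vertical-acceleration spike of one sign, a cruise of at least
    `min_cruise_s` seconds near zero net acceleration, then an opposite spike."""
    dev = acc_z - g                           # net vertical acceleration
    rides = []
    i, n = 0, len(dev)
    while i < n:
        if abs(dev[i]) > spike:
            sign = np.sign(dev[i])
            j = i
            while j < n and abs(dev[j]) > spike:     # end of opening spike
                j += 1
            k = j
            while k < n and abs(dev[k]) <= spike:    # cruise phase
                k += 1
            cruise_time = (k - j) * dt
            if k < n and cruise_time >= min_cruise_s and np.sign(dev[k]) == -sign:
                while k < n and abs(dev[k]) > spike: # end of closing spike
                    k += 1
                rides.append((i, k))
            i = k
        else:
            i += 1
    return rides
```

    Detected rides let the system connect otherwise disconnected per-floor maps into a single multi-floor pose graph.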

    Sensor fusion for flexible human-portable building-scale mapping

    This paper describes a system enabling rapid multi-floor indoor map building using a body-worn sensor system fusing information from RGB-D cameras, LIDAR, inertial, and barometric sensors. Our work is motivated by rapid-response missions by emergency personnel, in which the capability for one or more people to rapidly map a complex indoor environment is essential for public safety. Human-portable mapping raises a number of challenges not encountered in typical robotic mapping applications, including complex 6-DOF motion and the traversal of challenging trajectories such as stairs or elevators. Our system achieves robust performance in these situations by exploiting state-of-the-art techniques for robust pose graph optimization and loop closure detection, and it achieves real-time performance in indoor environments of moderate scale. Experimental results are demonstrated for human-portable mapping of several floors of a university building, demonstrating the system's ability to handle motion up and down stairs and to organize initially disconnected sets of submaps in a complex environment.

    Lincoln Laboratory; United States. Air Force (Contract FA8721-05-C-0002); United States. Office of Naval Research (Grant N00014-10-1-0936); United States. Office of Naval Research (Grant N00014-11-1-0688); United States. Office of Naval Research (Grant N00014-12-10020)
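
    Of the fused sensors, the barometer is what makes vertical motion through stairs and elevators cheaply observable. A minimal sketch of how pressure could be turned into a floor index is shown below; the ISA constants are standard, but the floor height and the rounding rule are illustrative assumptions, not this system's calibration:

```python
# Illustrative barometric floor-change detection: convert pressure to
# altitude with the standard ISA formula, then snap relative altitude to a
# floor-height multiple. Floor height and rounding are assumptions.
import numpy as np

def pressure_to_altitude(p_pa, p0_pa=101325.0):
    """ISA barometric formula (meters), valid near sea level."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

def floor_index(p_pa, p_ref_pa, floor_height_m=3.5):
    """Floor number relative to the pressure reading taken on floor 0."""
    rel = pressure_to_altitude(p_pa) - pressure_to_altitude(p_ref_pa)
    return int(round(rel / floor_height_m))
```

    A per-floor index like this gives the pose graph a coarse vertical constraint, helping organize the initially disconnected submaps the abstract mentions.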